
Journal of Experimental Psychology: Human Perception and Performance

American Psychological Association (APA)

All preprints, ranked by how well they match the content profile of the Journal of Experimental Psychology: Human Perception and Performance, based on 10 papers previously published here. The average preprint has a match score of 0.00% for this journal, so anything above that is already an above-average fit. Older preprints may already have been published elsewhere.
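How the match score is computed is not disclosed on this page; the sketch below shows one plausible reading of the description above, assuming papers are represented as text embeddings, the journal's profile is the mean embedding of its 10 previously published papers, and scores are centred so the average preprint lands at 0.00%. All names and shapes here are hypothetical.

```python
import numpy as np

def match_scores(preprint_vecs: np.ndarray, published_vecs: np.ndarray) -> np.ndarray:
    """Score preprints against a journal profile (illustrative only).

    The profile is the mean embedding of the journal's published papers;
    each preprint gets the cosine similarity to that profile, centred so
    that the average preprint scores 0.00%.
    """
    profile = published_vecs.mean(axis=0)
    profile /= np.linalg.norm(profile)
    normed = preprint_vecs / np.linalg.norm(preprint_vecs, axis=1, keepdims=True)
    sims = normed @ profile                 # cosine similarity per preprint
    return 100.0 * (sims - sims.mean())     # centred: mean preprint -> 0.00%

# Toy usage with random vectors standing in for real text embeddings.
rng = np.random.default_rng(0)
published = rng.normal(size=(10, 384))      # the 10 papers published here
preprints = rng.normal(size=(500, 384))     # candidate preprints
scores = match_scores(preprints, published)
print(round(scores.mean(), 6))              # ~0.0 by construction
```

Under this reading, a score like 9.9% sits far above the centred mean, consistent with the "Top 0.1%" labels on the entries below.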

1
Perceptual Decoys Do Not Reliably Bias Choice: Boundary-Condition Evidence

Ibarra, D.; Suri, G.

2025-11-04 neuroscience 10.1101/2025.11.02.686051 medRxiv
Top 0.1%
9.9%

The decoy effect occurs when adding an inferior third option biases choice between two others, even though the decoy is rarely chosen. While the effect is robust in value-based decisions, evidence in perceptual tasks is mixed. Using the rhesus-macaque paradigm from Parrish et al. (2015), we tested whether a perceptual decoy effect generalizes to humans. Participants (n = 50) completed 400 trials. Contrary to our preregistered prediction, we found no reliable decoy effect. Accuracy improved on the hardest trials (Level 1) when a decoy was present, response times were slower in decoy conditions than at baseline, and accuracy was higher for tall versus wide rectangles, consistent with the vertical-horizontal asymmetry. The relatively wide spacing of stimuli may have reduced grouping and attentional clustering; because spacing was not manipulated, this remains a hypothesis for future tests. Results suggest that context effects in perceptual choice operate under narrower boundary conditions than in value-based domains.

2
Converting color memory toward a spatial format to benefit behavior

Rawal, A.; Wolff, M. J.; Rademaker, R. L.

2026-02-27 neuroscience 10.64898/2026.02.27.708515 medRxiv
Top 0.1%
6.0%

Visual working memory allows for the brief maintenance of information to serve behavioral goals. It has been shown that when the specific action required to serve a future goal is predictable, people can flexibly change a visual memory representation to incorporate an action-based one, demonstrating the goal-oriented nature of visual working memory. Can such flexibility also be observed within the visual domain, between color and space? In this eye-tracking study, participants remembered either a centrally presented color or a spatial position around fixation. Critically, when remembering a color, the response wheel was either randomly rotated or shown at a fixed rotation on every trial. When fixed, every target color could be associated with a predictable position on the wheel during response. Do people incorporate this added spatial information in their behavior? Participants utilized color-space associations when remembering color: response initiation happened faster when the color wheel was fixed compared to random, irrespective of whether an action could be planned or not. Next, we showed that gaze was biased towards the position of the spatial memory target during the delay, extending previous work on gaze biases. Importantly, when remembering a color, gaze was also biased towards the anticipated position of that color on the response wheel when it was fixed. Together, our results show a behavioral benefit of added spatial information for color memory, and systematic changes in gaze that reflect flexible utilization of space.

3
When exploration replaces storage: how eye movements shape visual working memory

Qais, R.; Knight, R.; Yuval-Greenberg, S.

2026-01-04 neuroscience 10.64898/2026.01.04.697560 medRxiv
Top 0.1%
4.7%

Visual working memory (VWM) is traditionally studied while constraining eye movements and limiting access to visual input, yet in natural vision humans constantly explore and resample their environment. Only a few studies have examined VWM utilization when participants were allowed to interact with the environment and found that participants often preferred to resample their environment rather than rely on VWM storage. However, since eye movements were not controlled in these studies, the link between VWM utilization and free visual exploration remained unknown. In two experiments (N = 40), we investigated how visual exploration shapes reliance on VWM versus perceptual input. Participants searched for a common target across two item sets and could either store multiple items for comparison or repeatedly resample the sets by switching between them. Results revealed that when switching was achieved through eye movements, participants consistently relied more on visual resampling and less on VWM; in contrast, when switching required a manual response, they shifted toward greater VWM use. This pattern persisted even when peripheral input was equated, suggesting that natural exploration through eye movements reduces the cognitive cost of acquiring visual information, leading to a strategic reduction in VWM use. Our findings challenge fixation-based approaches to VWM research and highlight the importance of studying cognition under ecological viewing conditions.

4
It's all about location: reliance on spatial rather than visual context when trying to remember

Taub, K.; Yuval-Greenberg, S.

2023-02-20 neuroscience 10.1101/2023.02.20.529294 medRxiv
Top 0.1%
4.2%

When people try to remember visual information, they often move their eyes in ways similar to how they did during encoding. The mechanism underlying this behavior is not yet fully understood. Specifically, it is unclear whether the purpose of this behavior is to recreate the visual input produced during encoding, or its motor and spatial elements. In this experiment, participants (N = 40) encoded pairs of greyscale objects overlaying colored squares. During test, participants were asked about the objects' orientation while presented with squares of the same colors, either at the same locations (controlled trials) or with locations switched (test trials) relative to encoding. Results show that during test trials, participants tended to gaze at the square appearing at the location where the remembered object was previously presented, rather than at the square of the same color. This indicates that the motor and spatial elements of eye movements take precedence over near-peripheral visual cues.

5
An advantage for targets located horizontally to the cued location

Clevenger, J.; Yang, P.-L.; Beck, D. M.

2019-08-20 neuroscience 10.1101/740712 medRxiv
Top 0.1%
3.3%

Over the years a number of researchers have reported enhanced performance for targets located horizontally to a cued location relative to those located vertically. However, many of these reports could stem from a known meridian asymmetry in which stimuli on the horizontal meridian show a performance advantage relative to those on the vertical meridian. Here we show a horizontal advantage for target and cue locations that reside outside the zone of asymmetry; that is, targets that appear horizontal to the cue, but above or below the horizontal meridian, are more accurate than those that appear vertical to the cue, but again either above or below the horizontal meridian (Experiments 1 and 4). This advantage does not extend to non-symmetrically located targets in the opposite hemifield (Experiment 2), nor to horizontally located targets within the same hemifield (Experiment 3). These data raise the possibility that display designs in which the target and cue locations are positioned symmetrically across the vertical midline may be underestimating the cue validity effect.

6
Frame Effects Across Space and Time

't Hart, B. M.; Cavanagh, P.

2025-10-28 neuroscience 10.1101/2025.10.24.684468 medRxiv
Top 0.1%
2.9%

When two probes are flashed at different times within a moving frame, they can be perceived as dramatically separated from each other even though they are at the same location in the display. This effect suggests that we perceive object position relative to the surrounding frame even when it is moving (Ozkan et al., 2021). Here, 8 experiments reveal new properties of this frame effect. First, the influence of the frame on the perceived probe positions extends beyond its bounding contours by several degrees of visual angle, both in the direction of the frame's motion and orthogonal to it. It is also undiminished when the probes and the frame are in different depth planes. However, the influence of the frame's motion shows no extension in time: there is no effect on probes presented after the frame is removed, and none retroactively on probes presented before the frame appears. The frame effect is also driven primarily by the displacement of the frame, not by its motion signals: the effect is stronger for moving bounded frames than for moving, unbounded random-dot textures. When the bounded region has an internal texture that moves with or against the frame's motion or remains static, it is the displacement of the frame that produces the perceived position shifts of the probes, while the effect of the internal motion is mostly suppressed. The frame's influence is unaffected by whether the motion is self-initiated or not, and does not reduce in strength across 2 hours of testing.

7
Perceiving Material Qualities from Moving Contours

Malik, A.; Yu, Y.; Boyaci, H.; Doerschner, K.

2025-07-24 neuroscience 10.1101/2025.07.21.665763 medRxiv
Top 0.1%
2.9%

While research on the perception of line drawings has long demonstrated the importance of contours in object recognition, recent work shows that contours can also convey material properties. For example, even simple 2D shapes with varying contours have been shown to evoke vivid impressions of different materials (Pinna & Deiana, 2015). However, such static representations capture only a single moment in time. When a material moves, its contours shift, evolve, or deform over time, creating contour motion. Does this contour motion convey diagnostic information about material properties, independent of surface appearance? Existing studies on the role of dynamic cues in material perception either use fully rendered 3D stimuli, where contour motion is confounded with rich surface information, or motion-only displays (dynamic dot stimuli or noise patches), which eliminate surface cues but also lack clearly defined contours. As a result, the relative contribution of contour motion to material perception remains unclear. To address this gap, we measured how human observers perceive materials from dynamic line drawings ("line"), compared to animations of fully textured stimuli that carry optical and motion information ("full"), as well as dynamic dot stimuli ("dot"). Stimuli were rendered versions (full, dot, line) of material animations from five material categories (jelly, liquid, smoke, fabric, and rigid-breakable). In one experiment, participants rated five material attributes (dense, flexible, wobbly, fluid, airy motion); in a second experiment, participants were asked to choose which of two materials was more similar to a third, across all possible combinations. Results from both experiments consistently reveal that 1) dynamic line drawings vividly convey mechanical material properties, and 2) the similarity in material judgments between line and full conditions was larger than that between dot and full conditions. We conclude that contour motion carries rich information about mechanical material qualities.

8
Nested contextual change and the temporal compression of episodic memory

Logie, M.; Grasso, C.; van Wassenhove, V.

2026-02-26 neuroscience 10.64898/2026.02.26.708184 medRxiv
Top 0.1%
2.7%

How does the structure of events influence the when and the where of experience, in comparison to the what? We developed a novel virtual reality (VR) environment to understand how the quantity of information within nested structures influences participants' memory for events. Participants moved through a series of virtual rooms (events) where images (items) appeared in randomised locations on a 3-by-3 grid located on a wall. Participants were asked to remember the what (old/new), when (timeline location), and where (grid location) of the images they experienced. Two types of nested events were tested (6 rooms, each containing 4 images; 3 rooms, each containing 8 images), with no difference in presentation duration. We found a strong temporal compression effect at nested levels, in which participants remembered early items and events as happening later, and later items and events as happening earlier, than in the original experience. Crucially, presenting four-item events resulted in a greater compression rate than eight-item events. We also found greater temporal distances between pairs of items occurring within eight-item events than between pairs of items occurring on either side of a boundary. Memory for when depends on the compression of information within events.

9
Materials in action: The look and feel of soft

Cavdan, M.; Drewing, K.; Doerschner, K.

2021-01-22 neuroscience 10.1101/2021.01.22.427730 medRxiv
Top 0.1%
2.7%

The softness of objects can be perceived through several senses. For instance, to judge the softness of our cat's fur, we do not only look at it; we also run our fingers in idiosyncratic ways through its coat. Recently, we have shown that haptically perceived softness covaries with the compliance, viscosity, granularity, and furriness of materials (Dovencioglu et al., 2020). However, it is unknown whether vision can provide similar information about the various aspects of perceived softness. Here, we investigated this question in an experiment with three conditions: in the haptic condition, blindfolded participants explored materials with their hands; in the visual-static condition, participants were presented with close-up photographs of the same materials; and in the visual-dynamic condition, participants watched videos of the hand-material interactions that were recorded in the haptic condition. After haptically or visually exploring the materials, participants rated them on various attributes. Our results show a high overall perceptual correspondence between the three experimental conditions. With a few exceptions, this correspondence tended to be strongest between the haptic and visual-dynamic conditions. These results are discussed with respect to the information potentially available through the senses, or through prior experience, when judging the softness of materials.

10
Viewpoint-Dependence and Scene Context Effects Generalize to Depth-Rotated 3D Objects

Kallmayer, A.; Vo, M. L.-H.; Draschkow, D.

2022-11-15 neuroscience 10.1101/2022.11.15.516659 medRxiv
Top 0.1%
2.6%

Viewpoint effects on object recognition interact with object-scene consistency effects. While recognition of objects seen from "accidental" viewpoints (e.g., a cup from below) is typically impeded compared to processing of objects seen from canonical viewpoints (e.g., the string-side of a guitar), this effect is reduced by meaningful scene context information. In the present study we investigated whether these findings, established using photographic images, generalise to 3D models of objects. Using 3D models further allowed us to probe a broad range of viewpoints and empirically establish accidental and canonical viewpoints. In Experiment 1, we presented 3D models of objects from six different viewpoints (0°, 60°, 120°, 180°, 240°, 300°), in colour (1a) and in grayscale (1b), in a sequential matching task. Viewpoint had a significant effect on accuracy and response times. Based on the performance in Experiments 1a and 1b, we determined canonical (0° rotation) and non-canonical (120° rotation) viewpoints for the stimuli. In Experiment 2, participants again performed a sequential matching task; however, now the objects were paired with scene backgrounds that could be either consistent (e.g., a cup in the kitchen) or inconsistent (e.g., a guitar in the bathroom) with the object. Viewpoint interacted significantly with scene consistency, in that object recognition was less affected by viewpoint when consistent scene information was provided, compared to inconsistent information. Our results show that viewpoint-dependence and scene context effects generalise to depth-rotated 3D objects. This supports the important role object-scene processing plays in object constancy.

11
Shared gaze reflects shared aesthetic experiences

Ekinci, M. A.; Kaiser, D.

2026-02-02 neuroscience 10.64898/2026.01.30.702749 medRxiv
Top 0.1%
2.3%

When individuals view the same visual input, they often differ in their aesthetic appeal judgments, yet why people differ remains largely unclear. Here, we tested whether individual differences in aesthetic experience are linked to differences in visual exploration. In two experiments, participants watched the documentary "Home" while their eye movements were recorded. In Experiment 1, participants continuously rated aesthetic experience throughout the movie, whereas in Experiment 2, they watched the first half without a task and rated aesthetic experience only during the second half. Inter-individual similarity in gaze patterns, assessed using fixation heatmaps across time, predicted similarity in aesthetic appeal judgments in both experiments. Notably, in Experiment 2, gaze similarity during free viewing in the first half of the movie predicted similarity in aesthetic ratings during the second half, indicating that incidental eye movement patterns predict aesthetic experiences. Together, these results show that shared gaze patterns are linked to shared aesthetic experiences under naturalistic, dynamic viewing conditions.
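The central analysis here, pairwise similarity of gaze patterns predicting pairwise similarity of aesthetic ratings, has the structure of an intersubject representational analysis. A minimal sketch under assumed inputs (flattened fixation heatmaps over time bins and a rating time course per subject); all shapes and names are hypothetical, the authors' actual pipeline may differ, and dependent subject pairs would call for a permutation (Mantel-style) test rather than the parametric p-value shown:

```python
import numpy as np
from scipy.stats import spearmanr

def pairwise_similarity(features: np.ndarray) -> np.ndarray:
    """Pearson correlation between every pair of subjects' feature vectors."""
    return np.corrcoef(features)

def upper_triangle(mat: np.ndarray) -> np.ndarray:
    """Vectorize the unique subject pairs of a square similarity matrix."""
    return mat[np.triu_indices_from(mat, k=1)]

# Hypothetical data: 30 subjects, fixation heatmaps flattened over
# (pixels x time bins), plus one continuous aesthetic rating per subject.
rng = np.random.default_rng(1)
gaze = rng.normal(size=(30, 64 * 64 * 20))
ratings = rng.normal(size=(30, 200))

gaze_sim = pairwise_similarity(gaze)        # 30 x 30 gaze-similarity matrix
rating_sim = pairwise_similarity(ratings)   # 30 x 30 rating-similarity matrix

# Do subject pairs with similar gaze also report similar aesthetic experiences?
rho, p = spearmanr(upper_triangle(gaze_sim), upper_triangle(rating_sim))
print(f"rho = {rho:.3f}, p = {p:.3f}")
```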

12
Gaze biases can reflect task-specific spatial memorization strategies

Chota, S.; Arora, K.; Kenemans, L.; Gayet, S.; Van der Stigchel, S.

2024-08-30 neuroscience 10.1101/2024.08.30.610231 medRxiv
Top 0.1%
2.3%

Previous work has suggested that small directional eye movements not only reveal the focus of external spatial attention towards visible stimuli, but also accompany shifts of internal attention to stimuli in visual working memory (VWM) (van Ede et al., 2019). When the orientations of two bars are memorized and a subsequent retro-cue indicates which orientation needs to be reported, participants' gaze is systematically biased towards the former location of the cued item (Figure 1A-B). This finding was interpreted as evidence that the oculomotor system indexes internal attention; that is, attention directed at the location of stimuli that are no longer presented but are maintained in VWM. Importantly, as the location of the bars is presumably not relevant to the memory report, the authors concluded that orientation features in VWM are automatically associated with locations, suggesting that VWM is inherently spatially organized. This conclusion depends on the key assumption that participants indeed memorize and subsequently attend orientation features. Here we re-analyse Experiment 1 by van Ede et al. (2019) and demonstrate that this assumption does not hold. Instead of memorizing orientation features, participants deployed an alternative spatial strategy by memorizing bar endpoints. Although we do not call into question the conclusion that internal attention is inherently spatially organized, our results do imply that directional gaze biases might also reflect attention directed at task-relevant stimulus endpoints, rather than internal attention directed at memorized orientations.
[Figure 1 (image not shown): Gaze density maps from Experiment 1 of van Ede et al. (2019) (N = 23, 20,864 trials included, 400 to 1000 ms). A-B. Original reported effect of cued-item location on gaze bias, calculated by subtracting cued-item-left and cued-item-right gaze density maps; rectangles indicate the stimulus positions and orientation ranges (min: 20°, mean: 45°, max: 70°; min: 110°, mean: 135°, max: 160°) of the bar stimuli. C. Normalized gaze bias vectors per condition, horizontal vectors, and average vectors pointing towards the most foveal bar endpoints; vector endpoints were calculated from the centre of mass of each condition, ignoring negative values. Circular t-tests revealed that individual gaze bias vector angles differed significantly from horizontal vectors but not from endpoint vectors. F and I. Vertical gaze bias revealed by separating trials based on bar orientations: both bar endpoints "upwards" (left: 20° to 70°, right: 110° to 160°) minus both "downwards" (F), and the reverse (I). D, E, G, H. Individual gaze density maps for each attention direction (left versus right) and bar endpoint direction (upwards versus downwards) separately. Solid black lines show the average vector pointing towards the closest 45°/135° bar endpoint, i.e., the optimal gaze location for solving the memory task by maintaining a spatial location.]
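The caption above describes deriving gaze-bias vectors from the centre of mass of a difference density map, ignoring negative values. A minimal sketch of that computation; the grid, coordinates, and demo map are hypothetical:

```python
import numpy as np

def gaze_bias_vector(diff_map, x_coords, y_coords):
    """Centre of mass of a difference gaze-density map, ignoring negative
    values, returned as the (x, y) endpoint of a bias vector from fixation."""
    w = np.clip(diff_map, 0.0, None)        # keep positive density only
    xs, ys = np.meshgrid(x_coords, y_coords)
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()

# Demo: a density bump centred at (1.5, 0.8) degrees of visual angle.
x = np.linspace(-3, 3, 61)
y = np.linspace(-3, 3, 61)
xs, ys = np.meshgrid(x, y)
demo = np.exp(-((xs - 1.5) ** 2 + (ys - 0.8) ** 2))
print(gaze_bias_vector(demo, x, y))         # approximately (1.5, 0.8)
```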

13
Cognitive Load Impairs Professional Scepticism in Decision-Making: The Mitigating Role of Default Nudges

Erfanian, M.; Meunier, L.; Gajewski, J.-F.

2025-05-10 neuroscience 10.1101/2025.05.05.652071 medRxiv
Top 0.1%
2.1%

Cognitive overload can impair professional scepticism in high-stakes contexts such as auditing. In these settings, sustaining professional scepticism is essential. Default nudges, or pre-selected options, may offset these effects by reducing cognitive demands. We conducted two online experiments to examine how cognitive load and default nudges influence professional scepticism in auditing decisions. Experiment 1 validated a dot memory task manipulation of cognitive load and identified low- and high-load conditions for subsequent testing. Experiment 2 embedded this manipulation in Phillips' audit task, used for measuring professional scepticism in auditing. Results showed that cognitive load slowed responses and reduced accuracy. Default nudges accelerated responding and improved accuracy under load, but only when aligned with the most probable response; misaligned nudges reduced accuracy. These findings suggest that defaults act as conditional scaffolds under cognitive strain, supporting judgment and decision-making in some contexts but introducing risks in others: misaligned defaults can exploit intuitive responding rather than enhance deliberation.

14
Object speed and distractor number do not affect attentional allocation in multiple object tracking

Adamian, N.; Akalan, F.; Andersen, S. K.

2025-12-28 neuroscience 10.64898/2025.12.28.696734 medRxiv
Top 0.1%
2.0%

Keeping track of multiple moving objects across dynamic real-world scenarios such as driving, team sports, or crowded social environments is a fundamental challenge for visual attention. We have previously demonstrated that as the number of tracked objects increases, the strength of attentional facilitation allocated to each individual object decreases, limiting tracking success. It is also well established that, beyond the number of tracked objects, faster-moving objects and objects embedded amongst higher numbers of distractors are more difficult to track. Are these effects on tracking difficulty also mediated by less effective allocation of attention to tracked targets, as in the case of tracking more targets? If so, one should expect the strength of attentional modulation to drop systematically with increasing speed and total number of moving stimuli. In two experiments (total n = 70), participants were instructed to track moving targets amongst identical distractors while we manipulated object speed (Experiment 1) and number (Experiment 2). As expected, tracking performance declined with both manipulations. However, steady-state visual evoked potentials (SSVEPs) recorded during successful tracking revealed that attentional enhancement of tracked targets compared with distractors did not drop with increasing speed or object number. In summary, bottom-up changes in the stimulus display and top-down attentional manipulations affect tracking performance in independent ways, with the balance between the strength of attentional allocation and the bottom-up demands of the task determining successful tracking. The allocation of attention itself seems to be determined exclusively by top-down goals rather than being reactive to bottom-up display characteristics. Open Practices Statement: Participant-level data and analysis code for all experiments are available at (https://osf.io/ypgfs/), and Experiment 1 was preregistered (https://osf.io/pxh25/). Significance statement: Keeping track of multiple moving objects is fundamental to navigating dynamic real-world scenarios. This ability is accomplished through multifocal attentional selection, which weakens as the number of tracked targets increases. This study asks whether other stimulus manipulations increase tracking difficulty by diluting attentional allocation. Using steady-state visual evoked potentials to measure selective attention during tracking, we demonstrate that both increases in speed and distractor number impair performance; however, they do not affect attentional enhancement of targets. This suggests that top-down attentional control operates independently from bottom-up demands.
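SSVEP attention measures rest on frequency tagging: targets and distractors flicker at distinct rates, and the EEG amplitude at each tagging frequency indexes the attention allocated to that stimulus class. A minimal, hypothetical sketch of the readout (not the authors' pipeline; the 12/15 Hz tags and all signals below are invented for illustration):

```python
import numpy as np

def ssvep_amplitude(eeg: np.ndarray, sfreq: float, freq: float) -> float:
    """Single-sided FFT amplitude of one EEG epoch at a tagging frequency."""
    spectrum = 2.0 * np.abs(np.fft.rfft(eeg)) / eeg.size
    freqs = np.fft.rfftfreq(eeg.size, d=1.0 / sfreq)
    return spectrum[np.argmin(np.abs(freqs - freq))]

# Synthetic 4-s epoch: targets tagged at 12 Hz (attended, larger response),
# distractors at 15 Hz (smaller response), plus noise.
sfreq = 500.0
t = np.arange(0, 4.0, 1.0 / sfreq)
eeg = (1.2 * np.sin(2 * np.pi * 12 * t)
       + 0.8 * np.sin(2 * np.pi * 15 * t)
       + np.random.default_rng(2).normal(0.0, 1.0, t.size))

ratio = ssvep_amplitude(eeg, sfreq, 12) / ssvep_amplitude(eeg, sfreq, 15)
print(f"target/distractor amplitude ratio: {ratio:.2f}")  # > 1 when targets enhanced
```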

15
Not so automatic: Task relevance and perceptual load modulate cross-modal semantic congruence effects on spatial orienting

Kvasova, D.; Soto-Faraco, S.

2019-11-05 neuroscience 10.1101/830679 medRxiv
Top 0.1%
2.0%

Recent studies show that cross-modal semantic congruence plays a role in spatial attention orienting and visual search. However, the extent to which these cross-modal semantic relationships attract attention automatically is still unclear, and the outcomes of different studies have so far been inconsistent. Variations in the task-relevance of the cross-modal stimuli (from explicitly needed to completely irrelevant) and in the amount of perceptual load may account for the mixed results of previous experiments. In the present study, we addressed the effects of audio-visual semantic congruence on visuo-spatial attention across variations in task relevance and perceptual load. We used visual search amongst images of common objects paired with characteristic object sounds (e.g., a guitar image and a chord sound). We found that audio-visual semantic congruence speeded visual search times when the cross-modal objects were task-relevant, or when they were irrelevant but presented under low perceptual load. When perceptual load was high, however, sounds failed to attract attention towards the congruent visual images. These results lead us to conclude that object-based cross-modal congruence does not attract attention automatically and requires some top-down processing.

16
Transfer of symbolic numeral adaptation across eyes and hemifields

Nakamura, A.; Luo, J.; Yokoi, I.; Takemura, H.

2026-03-12 neuroscience 10.64898/2026.03.10.710478 medRxiv
Top 0.1%
1.8%

Visual perception of symbolic numerals is essential for everyday tasks; however, the neural and perceptual mechanisms underlying this ability remain unclear. Partially occluded digital numerals can elicit bistable perception, and adaptation to symbolic numerals alters the perception of these ambiguous stimuli. We aimed to examine how symbolic numeral adaptation is related to hierarchical visual processing by testing its interocular and interhemifield transfer. Experiment 1 tested interocular transfer by presenting the test stimulus to either the same or opposite eye as the adaptation stimulus. Experiment 2 assessed interhemifield transfer by presenting the test stimulus to either the same or opposite hemifield as the adaptation stimulus. Experiment 3 examined the interhemifield transfer of adaptation confined to the upper parts of digital numerals. Our results showed that adaptation to digital numerals induced shifted perceptual interpretations that transferred across eyes. In addition, we found that adaptation to digital numerals induced a relatively small but statistically significant interhemifield transfer. In contrast, adaptation restricted to the upper parts of digital numerals showed no significant interhemifield transfer. These findings suggest that the perceptual interpretation of symbolic numerals involves visual processing stages that integrate information across the eyes and hemifields.

17
Behavioral and Eye-Tracking Evidence for Disrupted Event Segmentation during Continuous Memory Encoding Due to Short Video Watching

Li, H.; Li, J.; Hao, X.; Liu, W.

2024-08-19 neuroscience 10.1101/2024.08.17.608429 medRxiv
Top 0.1%
1.8%

The proliferation of short-video platforms prompts critical investigation of their effects on human cognitive functions. We hypothesized that the frequent, user-driven content shifts inherent to short-video watching impair event segmentation, a cognitive process critical for organizing continuous experience into discrete episodic memories. To investigate this hypothesis, we combined behavioral memory tasks, eye-tracking, and self-report questionnaires. Study 1 (N = 113) revealed that exposure to randomly selected short videos impaired subsequent memory for continuous movies. This impairment was not observed following exposure to personalized short videos, nor was it present in trial-based static-image encoding tasks (Study 2, N = 60), suggesting a selective disruption of continuous memory encoding. Intersubject correlation (ISC) analysis of eye movements revealed decreased synchronization at event boundaries during movie watching after exposure to random short videos. Furthermore, a Hidden Markov Model (HMM) analysis indicated that this exposure led to more fragmented event segmentation during continuous memory encoding. In contrast, while pupil size and gaze movement speed were sensitive to event boundaries, these metrics were not modulated by prior short-video watching, indicating that the disruption is specific to the segmentation process itself and not to lower-level boundary detection. Collectively, these findings demonstrate a negative impact of certain short-video watching habits on event segmentation and subsequent memory, underscoring the powerful role of platform algorithms in shaping human cognition.

18
Individual differences in visual search performance extend from artificial arrays to naturalistic environments

Botch, T. L.; Garcia, B. D.; Choi, Y. B.; Robertson, C. E.

2021-10-16 neuroscience 10.1101/2021.10.15.464609 medRxiv
Top 0.1%
1.8%

Visual search is a universal human activity in naturalistic environments. Traditionally, visual search is investigated under tightly controlled conditions, where head-restricted participants locate a minimalistic target in a cluttered array presented on a computer screen. Do classic findings of visual search extend to naturalistic settings, where participants actively explore complex, real-world scenes? Here, we leverage advances in virtual reality (VR) technology to relate individual differences in classic visual search paradigms to naturalistic search behavior. In a naturalistic visual search task, participants looked for an object within their environment via a combination of head turns and eye movements using a head-mounted display. Then, in a classic visual search task, participants searched for a target within a simple array of colored letters using only eye movements. We tested how set size, a property known to limit visual search within computer displays, predicts the efficiency of search behavior inside immersive, real-world scenes that vary in levels of visual clutter. We found that participants' search performance was impacted by the level of visual clutter within real-world scenes. Critically, we also observed that individual differences in visual search efficiency in classic search predicted efficiency in real-world search, but only when the comparison was limited to the forward-facing field of view for real-world search. These results demonstrate that set size is a reliable predictor of individual performance across computer-based and active, real-world visual search behavior.

19
Comparing auditory and visual aspects of multisensory working memory using bimodally matched feature patterns

Turpin, T.; Uluc, I.; Lankinen, K.; Mamashli, F.; Ahveninen, J.

2023-12-04 neuroscience 10.1101/2023.08.03.551865 medRxiv
Top 0.1%
1.7%

Working memory (WM) reflects the transient maintenance of information in the absence of external input, which can be attained via multiple senses separately or simultaneously. Pertaining to WM, the prevailing literature suggests the dominance of vision over other sensory systems. However, this imbalance may stem from challenges in finding comparable stimuli across modalities. Here, we addressed this problem by using a balanced multisensory retro-cue WM design, which employed combinations of auditory (ripple sounds) and visuospatial (Gabor patches) patterns, adjusted relative to each participant's discrimination ability. In three separate experiments, participants were asked to determine whether the (retro-cued) auditory and/or visual items maintained in WM matched or mismatched a subsequent probe stimulus. In Experiment 1, all stimuli were audiovisual, and the probes were either fully mismatching, only partially mismatching, or fully matching the memorized item. Experiment 2 was otherwise the same as Experiment 1, but the probes were unimodal. In Experiment 3, participants were cued to maintain only the auditory or visual aspect of an audiovisual item pair. In two of the three experiments, matching performance was significantly more accurate for the auditory than for the visual attributes of probes. When perceptual and task demands are bimodally equated, auditory attributes can be matched to multisensory items in WM at least as accurately as, if not more precisely than, their visual counterparts.

20
Context-dependent benefits of training and reminders in visual skill learning

Kolken, Y. J. T.; Roberts, M. J.; Naseri, P.; Martino, F. D.; Censor, N.; Weerd, P. D.

2025-11-14 neuroscience 10.1101/2025.11.14.688438 medRxiv
Top 0.1%
1.7%

Previous studies using a visual texture discrimination task (TDT) have demonstrated that performance enhancements resulting from extensive daily training (full training condition) remained intact after replacing all training, except for the first and last session, with a few daily reminder trials (reminder condition). Omitting reminders (control condition) yielded only limited learning, supporting their crucial contribution. We first confirmed these findings and excluded gaze-position differences among conditions as a contributing factor. Next, we tested whether the reminders' effectiveness is specific to a context of limited attention to the peripheral target, caused by simultaneously performing a demanding fixation task. Removing the fixation task yielded performance levels in the first session matching those normally reached only after lengthy daily training, suggesting that learning in the standard TDT involves the redeployment of attention. After changing texture parameters to increase the difficulty of the task, performing the TDT without a fixation task yielded learning in all three conditions. This indicates that in a dual-task setting, reminders can produce learning outcomes comparable to full training. In contrast, when the TDT is performed with full attention to the target, consolidation of the initial session alone can yield improvements equivalent to those observed in the reminder and full training conditions.